Generalization Bounds for Learning Kernels

Authors

  • Corinna Cortes
  • Mehryar Mohri
  • Afshin Rostamizadeh
Abstract

This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^(1/4). We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.
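As a hedged sketch of the shape of the L1 result (the notation below is the standard kernel-learning setup, not anything defined on this page: Λ bounds the norm of the weight vector, R² bounds the base kernels on the sample, and m is the sample size), the hypothesis set and its Rademacher complexity read

\[
H_1 = \Big\{\, x \mapsto \langle w, \Phi_K(x) \rangle \;:\; K \in \operatorname{conv}\{K_1, \dots, K_p\},\ \|w\|_{\mathcal{H}_K} \le \Lambda \,\Big\},
\qquad
\mathfrak{R}_m(H_1) = O\!\left( \sqrt{\frac{\Lambda^2 R^2 \log p}{m}} \right),
\]

which, plugged into a standard margin bound, gives the √(log p) dependency on the number of kernels stated above; the paper's exact constants differ.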


Similar papers

Generalization Bounds for Learning the Kernel

In this paper we develop a novel probabilistic generalization bound for the learning-the-kernel problem. First, we show that the generalization analysis of kernel learning algorithms reduces to the investigation of the suprema of the Rademacher chaos process of order two over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher ...


Bounds for Learning the Kernel: Rademacher Chaos Complexity

In this paper we develop a novel probabilistic generalization bound for regularized kernel learning algorithms. First, we show that the generalization analysis of kernel learning algorithms reduces to the investigation of the suprema of a homogeneous Rademacher chaos process of order two over candidate kernels, which we refer to as Rademacher chaos complexity. Our new methodology is based on the princ...


Generalization Bounds for Learning the Kernel Problem

In this paper we develop a novel probabilistic generalization bound for the learning-the-kernel problem. First, we show that the generalization analysis of the regularized kernel learning system reduces to the investigation of the suprema of the Rademacher chaos process of order two over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rad...
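The three entries above reduce the analysis to the same object. As a hedged sketch (following the usual definition, up to normalization conventions; σ₁, …, σ_m are i.i.d. Rademacher variables and 𝒦 is the class of candidate kernels), the empirical Rademacher chaos complexity of order two is

\[
\hat{U}_m(\mathcal{K}) \;=\; \mathbb{E}_{\sigma} \sup_{K \in \mathcal{K}} \Bigg| \frac{1}{m} \sum_{i \neq j} \sigma_i \sigma_j K(x_i, x_j) \Bigg|,
\]

a degree-two analogue of the ordinary Rademacher average, taken over the kernel class rather than over a function class.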


New Generalization Bounds for Learning Kernels

This paper presents several novel generalization bounds for the problem of learning kernels based on the analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log p dependency on the number of kernels, p, which is considerably more favorable than the previous best bound given for the same...


Improved Loss Bounds For Multiple Kernel Learning

We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher c...
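The Rademacher-complexity quantities discussed in these abstracts can be probed numerically. Below is a minimal Monte Carlo sketch in Python (the Gaussian-kernel family, the bound lam, and all function names are illustrative assumptions, not code from any of the papers). It estimates the empirical Rademacher complexity of the L1-regularized MKL class, using the fact that the supremum over a convex hull of kernels is attained at one of the base kernels:

import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernels(X, gammas):
    # Gram matrices of Gaussian kernels k(x, y) = exp(-gamma * ||x - y||^2).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return [np.exp(-g * d2) for g in gammas]

def l1_mkl_rademacher(kernels, lam=1.0, n_draws=200):
    # Monte Carlo estimate of E_sigma sup_h (1/m) sum_i sigma_i h(x_i) over
    # H = {x -> <w, Phi_K(x)> : K in conv{K_1, ..., K_p}, ||w|| <= lam}.
    # For a fixed sigma the supremum equals (lam/m) max_k sqrt(sigma' K_k sigma),
    # because a function linear in K is maximized at a vertex of the convex hull.
    m = kernels[0].shape[0]
    draws = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)
        draws.append(max(np.sqrt(max(sigma @ K @ sigma, 0.0)) for K in kernels))
    return lam / m * float(np.mean(draws))

X = rng.standard_normal((100, 5))
for p in (1, 4, 16, 64):
    Ks = gaussian_kernels(X, np.logspace(-2, 2, p))  # p base kernels of varying width
    print(f"p = {p:3d}  estimated complexity = {l1_mkl_rademacher(Ks):.4f}")

On random data the printed estimates should grow only slowly with p, consistent with the logarithmic-type dependency these bounds predict.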




Publication date: 2010